
    The complexity of joint computation

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 253-266).

    Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas:

    Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability at most 1 - ε in computing the Boolean function f on input distribution μ, then for some constant α > 0, the worst-case success probability of any αR₂(f)k-query randomized algorithm for the k-fold direct product f^k falls exponentially with k. The best previous statement of this type, due to Klauck, Špalek, and de Wolf, required a query bound of O(bs(f)k). Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve f^k. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.
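
    In symbols, the headline DPT can be sketched as follows (a hedged LaTeX restatement of the claim above; the constant α > 0 and the exact exponential rate are those established in the thesis, with 2^{-Ω(k)} standing in for "falls exponentially with k"):

    \[
      \min_{x_1,\dots,x_k} \Pr\Bigl[\mathcal{A}(x_1,\dots,x_k) = \bigl(f(x_1),\dots,f(x_k)\bigr)\Bigr] \;\le\; 2^{-\Omega(k)}
      \quad\text{for every randomized } \mathcal{A} \text{ making at most } \alpha\, R_2(f)\, k \text{ queries.}
    \]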

    Joint complexity in the decision tree model: We study the diversity of possible behaviors of the joint computational complexity of a collection f₁, ..., f_k of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by Savický. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.

    Limitations of lower-bound methods for the wire complexity of Boolean operators: We study the circuit complexity of Boolean operators, i.e., collections of Boolean functions defined over a common input. Our focus is the well-studied model in which arbitrary Boolean functions are allowed as gates, and in which a circuit's complexity is measured by its depth and number of wires. We show sharp limitations of several existing lower-bound methods for this model. First, we study an information-theoretic lower-bound method due to Cherukhin, which gave the first improvement over the lower bounds provided by the well-known superconcentrator technique for constant depths. (The lower bounds are still barely superlinear, however.) Cherukhin's method was formalized by Jukna as a general lower-bound criterion for Boolean operators, the "Strong Multiscale Entropy" (SME) property. It seemed plausible that this property could imply significantly better lower bounds through an improved analysis. However, we show that this is not the case, by exhibiting an explicit operator with the SME property that is computable in constant depth, with wire complexity essentially matching the Cherukhin-Jukna lower bound (to within a constant multiplicative factor, for depths d = 2, 3 and for even depths d ≥ 6). Next, we show limitations of two simpler lower-bound criteria given by Jukna: the "entropy method" for general operators, and the "pairwise-distance method" for linear operators. We show that neither method gives super-linear lower bounds for depth 3. In the process, we obtain the first known polynomial separation between the depth-2 and depth-3 wire complexities of an explicit operator. We also continue the study (initiated by Jukna) of the complexity of "representing" a linear operator by bounded-depth circuits, a weaker notion than computing the operator.

    New limits to classical and quantum instance compression: Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem. As a representative problem, say we are given Boolean formulas ψ₁, ..., ψ_t, each of length n << t, and we want to determine whether at least one ψ_j is satisfiable. Can we efficiently reduce this "OR-SAT" question to an equivalent problem instance (of SAT or another problem) of size poly(n), independent of t? We call any such reduction a "strong compression" reduction for OR-SAT. This would amount to a major gain from compressing ψ₁, ..., ψ_t jointly, since we know of no way to reliably compress an individual SAT instance.
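
    The strong-compression notion just described can be put in symbols (a hedged LaTeX formalization; the target language L' is arbitrary, as in the abstract, and the reduction R is required to run in polynomial time):

    \[
      R(\psi_1,\dots,\psi_t) = y, \qquad |y| \le \mathrm{poly}(n), \qquad y \in L' \iff \exists\, j:\ \psi_j \in \mathrm{SAT},
    \]

    where, crucially, the output size is independent of t.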

    Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong compression for OR-SAT would also imply limits to instance compression schemes for a large number of other natural problems; this is significant because instance compression is a central technique in the design of so-called fixed-parameter tractable algorithms. Bodlaender et al. also showed that the infeasibility of strong compression for the analogous "AND-SAT" problem would establish limits to instance compression for another family of problems. Fortnow and Santhanam (STOC '08) showed that deterministic (or 1-sided-error randomized) strong compression for OR-SAT is not possible unless NP ⊆ coNP/poly; the case of AND-SAT remained mysterious. We give new and improved evidence against strong compression schemes for both OR-SAT and AND-SAT; our method applies to probabilistic compression schemes with 2-sided error. We also give versions of these results for an analogous task of quantum instance compression, in which a polynomial-time quantum reduction must output a quantum state that, in an appropriate sense, "preserves the answer" to the input instance. We give quantitatively similar evidence against strong compression for AND- and OR-SAT in this setting, albeit under less well-studied hypotheses about the relationship between NP and quantum complexity classes. To prove all of these results, we exploit the information bottleneck of an instance compression scheme, using a new method to "disguise" information being fed into a compressive mapping.

    by Andrew Donald Drucker. Ph.D.

    Probabilistically Checkable Proofs for Arthur-Merlin games and communication protocols

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 59-62).

    Probabilistically Checkable Proofs (PCPs) are an important class of proof systems that have played a key role in computational complexity theory. In this thesis we study the power of PCPs in two new settings: Arthur-Merlin games and communication protocols.

    In the first part of the thesis, we give a "PCP characterization" of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some "Randomized Optimization Hypotheses" related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses appear over-strong, and we present evidence against them. In the process we show that if some language in NP is hard-on-average against circuits of size 2^Ω(n), then there exist hard-on-average optimization problems of a particularly elegant form.

    In the second part of the thesis, we study PCPs in the setting of communication protocols. Using techniques inspired by Dinur's proof of the PCP Theorem, we show that functions f(x, y) with nondeterministic circuits of size m have "distributed PCP protocols" of proof length O(poly(m)) in which each verifier looks at a constant number of proof positions. We show a complementary negative result: a distributed PCP protocol using a proof of length ℓ, in which Alice and Bob look at k bits of the proof while exchanging t bits of communication, can be converted into a PCP-free randomized protocol whose communication is bounded in terms of k, t, and ℓ.

    In both parts of the thesis, our proofs make use of a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. In our work on Arthur-Merlin games, we also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks.
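
    The shape of the second part's positive result, in symbols (a hedged LaTeX restatement of the sentence above; the completeness/soundness parameters and the exact query constant are not given in this abstract and are left implicit):

    \[
      f(x,y) \text{ has nondeterministic circuits of size } m \;\Longrightarrow\; f \text{ has a distributed PCP protocol with proof length } O(\mathrm{poly}(m)) \text{ and } O(1) \text{ proof queries per verifier.}
    \]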

    by Andrew Donald Drucker. S.M.

    A Full Characterization of Quantum Advice

    We prove the following surprising result: given any quantum state ρ on n qubits, there exists a local Hamiltonian H on poly(n) qubits (e.g., a sum of two-qubit interactions), such that any ground state of H can be used to simulate ρ on all quantum circuits of fixed polynomial size. In terms of complexity classes, this implies that BQP/qpoly is contained in QMA/poly, which supersedes the previous result of Aaronson that BQP/qpoly is contained in PP/poly. Indeed, we can exactly characterize quantum advice as equivalent in power to untrusted quantum advice combined with trusted classical advice. Proving our main result requires combining a large number of previous tools -- including a result of Alon et al. on learning of real-valued concept classes, a result of Aaronson on the learnability of quantum states, and a result of Aharonov and Regev on "QMA+ super-verifiers" -- and also creating some new ones. The main new tool is a so-called majority-certificates lemma, which is closely related to boosting in machine learning, and which seems likely to find independent applications. In its simplest version, this lemma says the following: given any set S of Boolean functions on n variables, any function f in S can be expressed as the pointwise majority of m = O(n) functions f₁, ..., f_m in S, such that each f_i is the unique function in S compatible with O(log |S|) input/output constraints.
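
    In symbols, the simplest version of the majority-certificates lemma reads roughly as follows (a hedged LaTeX restatement of the statement above; MAJ denotes pointwise majority, and each "certificate" C_i is a set of input/output constraints):

    \[
      \forall f \in S \;\; \exists\, f_1,\dots,f_m \in S,\ m = O(n), \text{ and certificates } C_1,\dots,C_m \text{ with } |C_i| = O(\log |S|),
    \]
    \[
      \text{such that each } f_i \text{ is the unique member of } S \text{ consistent with } C_i, \text{ and } f(x) = \mathrm{MAJ}\bigl(f_1(x),\dots,f_m(x)\bigr) \text{ for all } x \in \{0,1\}^n.
    \]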

    National Science Foundation (U.S.). Division of Mathematical Sciences (Grant No. 0844626); United States. Defense Advanced Research Projects Agency, Young Faculty Award; W. M. Keck Foundation; Alfred P. Sloan Foundation

    Managing expert knowledge: organizational challenges and managerial futures for the UK medical profession

    The blurring of managerial and professional jurisdictions remains a significant area of organizational research. This process is often described as involving "re-stratification," the drawing of professional elites into bureaucratic roles, or "bureaucratization," the standardization of work operating procedures. We examine these processes further by considering how professional work is reordered through the application of knowledge management techniques, focusing in particular on the management of knowledge around clinical risk. We suggest that attempts by hospital risk managers to manage medical knowledge towards organizational learning represent a significant challenge to clinical freedom, given the centrality of expert knowledge to professional autonomy. In considering this challenge, we are attentive to the idea that change occurs not through the top-down challenge of management, nor the bottom-up resistance of professionals, but through the dynamic mediation of these influences within a wider institutional context. Accordingly, we find that doctors respond to change through a number of situated responses that limit management control over knowledge and reinforce claims to medical autonomy. In extending professional jurisdiction over the management of knowledge, we show how professionals such as doctors can themselves become managerialized as they seek to stave off managerial encroachment. Rather than seeing professionals as being drawn into management roles or bureaucratic ways of working, we suggest that managerial techniques and jurisdictions are also strategically drawn into professional practice and identity.

    Cómo Impulsar la Innovación Intraemprendedora en Organizaciones que Aprenden (How to Foster Intrapreneurship Innovation in Learning Organizations)


    Lessening Anxiety, Panic, and Complacency in Pandemics


    Sociologists of the Unexpected: Edward A. Ross and Georg Simmel on the Unintended Consequences of Modernity

    Groß, M. Sociologists of the Unexpected: Edward A. Ross and Georg Simmel on the Unintended Consequences of Modernity. The American Sociologist. 2003;34(4):40-58.